In [1]:
import networkx as nx
import matplotlib.pyplot as plt
from collections import Counter
from custom import custom_funcs as cf
import warnings
warnings.filterwarnings('ignore')
from circos import CircosPlot


%load_ext autoreload
%autoreload 2
%matplotlib inline

Load Data

This notebook was originally written around the sociopatterns network data; in this run we load a synthetic social network instead (see the code cell below). For reference, the Konect website describes the sociopatterns data as follows:

This network describes the face-to-face behavior of people during the exhibition INFECTIOUS: STAY AWAY in 2009 at the Science Gallery in Dublin. Nodes represent exhibition visitors; edges represent face-to-face contacts that were active for at least 20 seconds. Multiple edges between two nodes are possible and denote multiple contacts. The network contains the data from the day with the most interactions.


In [2]:
# Load the network data.
# The commented-out line loads the sociopatterns network described above:
# G = cf.load_sociopatterns_network()
# Here we load a synthetic social network instead.
G = nx.read_gpickle('Synthetic Social Network.pkl')

Hubs: How do we evaluate the importance of some individuals in a network?

Within a social network, there will be certain individuals who perform important functions. For example, there may be hyper-connected individuals who are connected to many, many more people than the rest. They would be useful for spreading information. Alternatively, if this were a disease contact network, identifying these individuals would be useful in stopping the spread of disease. How would we identify these people?

Approach 1: Neighbors

One way we could compute this is to find out the number of people an individual is connected to. NetworkX lets us do this by giving us a G.neighbors(node) function.


In [3]:
# Let's find out the number of neighbors that individual #7 has.
len(G.neighbors(7))


Out[3]:
2

In [4]:
G.nodes(data=True)


Out[4]:
[(0, {'age': 20, 'sex': 'Male'}),
 (1, {'age': 21, 'sex': 'Female'}),
 (2, {'age': 19, 'sex': 'Male'}),
 (3, {'age': 29, 'sex': 'Female'}),
 (4, {'age': 30, 'sex': 'Male'}),
 (5, {'age': 26, 'sex': 'Female'}),
 (6, {'age': 21, 'sex': 'Male'}),
 (7, {'age': 17, 'sex': 'Female'}),
 (8, {'age': 21, 'sex': 'Male'}),
 (9, {'age': 14, 'sex': 'Male'}),
 (10, {'age': 23, 'sex': 'Male'}),
 (11, {'age': 17, 'sex': 'Female'}),
 (12, {'age': 19, 'sex': 'Male'}),
 (13, {'age': 27, 'sex': 'Female'}),
 (14, {'age': 29, 'sex': 'Female'}),
 (15, {'age': 14, 'sex': 'Male'}),
 (16, {'age': 18, 'sex': 'Female'}),
 (17, {'age': 21, 'sex': 'Female'}),
 (18, {'age': 19, 'sex': 'Male'}),
 (19, {'age': 19, 'sex': 'Female'}),
 (20, {'age': 19, 'sex': 'Female'}),
 (21, {'age': 21, 'sex': 'Male'}),
 (22, {'age': 30, 'sex': 'Female'}),
 (23, {'age': 25, 'sex': 'Female'}),
 (24, {'age': 13, 'sex': 'Male'}),
 (25, {'age': 24, 'sex': 'Female'}),
 (26, {'age': 23, 'sex': 'Male'}),
 (27, {'age': 21, 'sex': 'Male'}),
 (28, {'age': 29, 'sex': 'Female'}),
 (29, {'age': 25, 'sex': 'Male'})]

In [5]:
G.edges(data=True)


Out[5]:
[(0, 10, {'date': datetime.datetime(2011, 6, 7, 0, 0)}),
 (0, 19, {'date': datetime.datetime(2011, 2, 12, 0, 0)}),
 (0, 12, {'date': datetime.datetime(2006, 8, 28, 0, 0)}),
 (1, 4, {'date': datetime.datetime(2009, 11, 8, 0, 0)}),
 (1, 2, {'date': datetime.datetime(2010, 8, 5, 0, 0)}),
 (1, 3, {'date': datetime.datetime(2005, 2, 3, 0, 0)}),
 (1, 12, {'date': datetime.datetime(2003, 3, 17, 0, 0)}),
 (1, 29, {'date': datetime.datetime(2005, 1, 15, 0, 0)}),
 (2, 16, {'date': datetime.datetime(2002, 5, 27, 0, 0)}),
 (2, 3, {'date': datetime.datetime(2009, 8, 13, 0, 0)}),
 (2, 6, {'date': datetime.datetime(2006, 1, 12, 0, 0)}),
 (2, 19, {'date': datetime.datetime(2010, 1, 6, 0, 0)}),
 (3, 8, {'date': datetime.datetime(2010, 6, 22, 0, 0)}),
 (3, 6, {'date': datetime.datetime(2009, 3, 20, 0, 0)}),
 (3, 23, {'date': datetime.datetime(2003, 11, 9, 0, 0)}),
 (4, 19, {'date': datetime.datetime(2007, 12, 4, 0, 0)}),
 (4, 28, {'date': datetime.datetime(2009, 5, 22, 0, 0)}),
 (6, 23, {'date': datetime.datetime(2011, 3, 4, 0, 0)}),
 (7, 24, {'date': datetime.datetime(2004, 9, 24, 0, 0)}),
 (7, 25, {'date': datetime.datetime(2009, 3, 21, 0, 0)}),
 (8, 17, {'date': datetime.datetime(2005, 11, 16, 0, 0)}),
 (8, 22, {'date': datetime.datetime(2010, 1, 22, 0, 0)}),
 (9, 24, {'date': datetime.datetime(2008, 12, 2, 0, 0)}),
 (9, 17, {'date': datetime.datetime(2009, 10, 11, 0, 0)}),
 (9, 11, {'date': datetime.datetime(2005, 4, 3, 0, 0)}),
 (10, 11, {'date': datetime.datetime(2005, 2, 6, 0, 0)}),
 (10, 21, {'date': datetime.datetime(2007, 1, 21, 0, 0)}),
 (11, 14, {'date': datetime.datetime(2010, 4, 28, 0, 0)}),
 (12, 19, {'date': datetime.datetime(2007, 12, 17, 0, 0)}),
 (12, 29, {'date': datetime.datetime(2008, 8, 27, 0, 0)}),
 (13, 16, {'date': datetime.datetime(2005, 5, 14, 0, 0)}),
 (13, 24, {'date': datetime.datetime(2006, 5, 7, 0, 0)}),
 (13, 14, {'date': datetime.datetime(2011, 3, 19, 0, 0)}),
 (14, 17, {'date': datetime.datetime(2008, 10, 17, 0, 0)}),
 (14, 25, {'date': datetime.datetime(2002, 6, 11, 0, 0)}),
 (15, 24, {'date': datetime.datetime(2007, 9, 2, 0, 0)}),
 (15, 28, {'date': datetime.datetime(2008, 3, 6, 0, 0)}),
 (16, 17, {'date': datetime.datetime(2002, 5, 20, 0, 0)}),
 (16, 19, {'date': datetime.datetime(2005, 8, 20, 0, 0)}),
 (17, 19, {'date': datetime.datetime(2006, 10, 13, 0, 0)}),
 (19, 22, {'date': datetime.datetime(2011, 11, 4, 0, 0)}),
 (19, 27, {'date': datetime.datetime(2009, 7, 27, 0, 0)}),
 (20, 27, {'date': datetime.datetime(2004, 1, 27, 0, 0)}),
 (20, 23, {'date': datetime.datetime(2007, 12, 3, 0, 0)}),
 (21, 27, {'date': datetime.datetime(2007, 2, 1, 0, 0)}),
 (21, 26, {'date': datetime.datetime(2006, 12, 14, 0, 0)}),
 (25, 28, {'date': datetime.datetime(2007, 2, 15, 0, 0)}),
 (26, 29, {'date': datetime.datetime(2006, 12, 19, 0, 0)})]

Exercise

Can you create a ranked list of the importance of each individual, based on the number of neighbors they have?

Hint: One suggested output would be a list of tuples, where the first element in each tuple is the node ID (an integer number), and the second element is the number of neighbors that it has.

Hint: Python's sorted(iterable, key=lambda x: ..., reverse=True) function may be of help here.


In [6]:
sorted([(n,G.neighbors(n)) for n in G.nodes()], key=lambda x: len(x[1]), reverse=True)


Out[6]:
[(19, [0, 16, 2, 4, 22, 17, 27, 12]),
 (1, [4, 2, 3, 12, 29]),
 (2, [16, 1, 3, 6, 19]),
 (3, [8, 1, 2, 6, 23]),
 (17, [8, 9, 19, 14, 16]),
 (12, [0, 1, 19, 29]),
 (14, [17, 11, 13, 25]),
 (16, [17, 2, 19, 13]),
 (24, [9, 7, 13, 15]),
 (0, [10, 19, 12]),
 (4, [1, 19, 28]),
 (6, [2, 3, 23]),
 (8, [17, 3, 22]),
 (9, [24, 17, 11]),
 (10, [0, 11, 21]),
 (11, [9, 10, 14]),
 (13, [16, 24, 14]),
 (21, [10, 27, 26]),
 (23, [3, 20, 6]),
 (25, [28, 14, 7]),
 (27, [19, 20, 21]),
 (28, [25, 4, 15]),
 (29, [1, 26, 12]),
 (7, [24, 25]),
 (15, [24, 28]),
 (20, [27, 23]),
 (22, [8, 19]),
 (26, [21, 29]),
 (5, []),
 (18, [])]

Approach 2: Degree Centrality

The number of other nodes that one node is connected to is a measure of its centrality. NetworkX implements a degree centrality, which is defined as the number of neighbors a node has, normalized by the maximum number of neighbors it could possibly have (i.e. every other node in the graph, or n - 1 nodes). This is accessed by using nx.degree_centrality(G).
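
As a quick sanity check of that definition (a small sketch, not part of the original notebook), a node's degree centrality should equal its number of neighbors divided by n - 1:

# Sketch: verify the normalization used by nx.degree_centrality.
n = len(G.nodes())
dc = nx.degree_centrality(G)
assert abs(dc[19] - len(G.neighbors(19)) / (n - 1)) < 1e-12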


In [7]:
nx.degree_centrality(G)


Out[7]:
{0: 0.10344827586206896,
 1: 0.1724137931034483,
 2: 0.1724137931034483,
 3: 0.1724137931034483,
 4: 0.10344827586206896,
 5: 0.0,
 6: 0.10344827586206896,
 7: 0.06896551724137931,
 8: 0.10344827586206896,
 9: 0.10344827586206896,
 10: 0.10344827586206896,
 11: 0.10344827586206896,
 12: 0.13793103448275862,
 13: 0.10344827586206896,
 14: 0.13793103448275862,
 15: 0.06896551724137931,
 16: 0.13793103448275862,
 17: 0.1724137931034483,
 18: 0.0,
 19: 0.27586206896551724,
 20: 0.06896551724137931,
 21: 0.10344827586206896,
 22: 0.06896551724137931,
 23: 0.10344827586206896,
 24: 0.13793103448275862,
 25: 0.10344827586206896,
 26: 0.06896551724137931,
 27: 0.10344827586206896,
 28: 0.10344827586206896,
 29: 0.10344827586206896}

If you inspect the dictionary closely, you will find that node 19 is the one with the highest degree centrality, just as we found by counting the number of neighbors.

There are other measures of centrality, namely betweenness centrality, flow centrality and load centrality. You can take a look at their definitions on the NetworkX API docs and their cited references. You can also define your own measures if those don't fit your needs, but that is an advanced topic that won't be dealt with here.

The NetworkX API docs that document the centrality measures are here: http://networkx.readthedocs.io/en/networkx-1.11/reference/algorithms.centrality.html?highlight=centrality#module-networkx.algorithms.centrality

Exercises

The following exercises are designed to get you familiar with the concept of "distribution of metrics" on a graph.

  1. Can you create a histogram of the distribution of degree centralities?
  2. Can you create a histogram of the distribution of number of neighbors?
  3. Can you create a scatterplot of the degree centralities against number of neighbors?
  4. If I have n nodes, then how many possible edges are there in total, assuming self-edges are allowed? What if self-edges are not allowed?

Hint: You may want to use:

plt.hist(list_of_values)

and

plt.scatter(x_values, y_values)

If you know the Matplotlib API, feel free to get fancy :).


In [8]:
# Possible Answers:
fig = plt.figure(0)
degree_centralities = [dc for n, dc in nx.degree_centrality(G).items()]
plt.hist(degree_centralities)
plt.title('Degree Centralities')


Out[8]:
<matplotlib.text.Text at 0x25183615da0>

In [12]:
fig = plt.figure(1)
neighbors = [len(G.neighbors(n)) for n in G.nodes()]
plt.hist(neighbors)
plt.title('Number of Neighbors')


Out[12]:
<matplotlib.text.Text at 0x25184196518>

In [14]:
fig = plt.figure(2)
plt.scatter(degree_centralities, neighbors)
plt.xlabel('Degree Centralities')
plt.ylabel('Number of Neighbors')


Out[14]:
<matplotlib.text.Text at 0x25184204860>
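
For question 4, one possible answer (assuming an undirected graph, as here): each pair of distinct nodes can contribute at most one edge, giving n * (n - 1) / 2 possible edges without self-edges; allowing self-edges adds one loop per node, for n * (n + 1) / 2 in total.

# Sketch: maximum possible edge counts for an undirected graph with n nodes.
n = len(G.nodes())
max_edges_without_self = n * (n - 1) // 2
max_edges_with_self = n * (n + 1) // 2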

Exercise

Before we move on to paths in a network, see if you can use the Circos plot to visualize the network. Sort the nodes by their IDs, which correspond to the order in which they appeared in the graph.


In [19]:
from circos import CircosPlot
import numpy as np

nodes = G.nodes()
edges = G.edges()
edgeprops = dict(alpha=0.5)  # set the edge transparency (alpha) value
nodecolor = plt.cm.viridis(np.arange(len(nodes)) / len(nodes))  # be sure to use viridis!

In [20]:
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
c = CircosPlot(nodes, edges, radius=10, ax=ax, fig=fig, edgeprops=edgeprops, nodecolor=nodecolor)
c.draw()
plt.savefig('images/sociopatterns.png', dpi=300)


What can you deduce about the structure of the network, based on this visualization?

Nodes are sorted by ID. Nodes tend to be connected to proximal rather than distal nodes. The data are based on people streaming through an enclosed space, so it makes sense that people are mostly connected to others who are close to them in order, though occasionally a few oddballs stick around and make longer-range contacts.

Paths in a Network

Graph traversal is akin to walking along the graph, node by node, restricted by the edges that connect the nodes. Graph traversal is particularly useful for understanding the local structure (e.g. connectivity, retrieving the exact relationships) of certain portions of the graph and for finding paths that connect two nodes in the network.

Using the synthetic social network, we will figure out how to answer the following questions:

  1. How long will it take for a message to spread through this group of friends? (making some assumptions, of course)
  2. How do we find the shortest path to get from individual A to individual B?

Shortest Path

Let's say we wanted to find the shortest path between two nodes. How would we approach this? One approach is what one would call a breadth-first search (http://en.wikipedia.org/wiki/Breadth-first_search). While not necessarily the fastest, it is the easiest to conceptualize.

The approach is essentially as such:

  1. Begin with a queue of the starting node.
  2. Add the neighbors of that node to the queue.
    1. If destination node is present in the queue, end.
    2. If destination node is not present, proceed.
  3. For each node in the queue:
    1. Remove node from the queue.
    2. Add neighbors of the node to the queue. Check if destination node is present or not.
    3. If destination node is present, end.
    4. If destination node is not present, continue.

Exercise

Try implementing this algorithm in a function called path_exists(node1, node2, G).

The function should take in two nodes, node1 and node2, and the graph G that they belong to, and return a Boolean that indicates whether a path exists between those two nodes or not. For convenience, also print out whether a path exists or not between the two nodes.


In [22]:
def path_exists(node1, node2, G):
    """
    This function checks whether a path exists between two nodes (node1, node2) in graph G.
    
    Special thanks to @ghirlekar for suggesting that we keep track of the "visited nodes" to
    prevent infinite loops from happening.
    
    Reference: https://github.com/ericmjl/Network-Analysis-Made-Simple/issues/3
    """
    visited_nodes = set()
    queue = [node1]
    while len(queue) > 0:
        node = queue.pop(0)  # take the next node off the front of the queue
        visited_nodes.add(node)
        neighbors = G.neighbors(node)
        if node2 in neighbors:
            print('Path exists between nodes {0} and {1}'.format(node1, node2))
            return True
        # Enqueue neighbors we have not yet visited or queued.
        queue.extend(n for n in neighbors if n not in visited_nodes and n not in queue)
    print('Path does not exist between nodes {0} and {1}'.format(node1, node2))
    return False

In [25]:
# Test your answer below
def test_path_exists():
    print(path_exists(18, 5, G))
    print(path_exists(29, 26, G))
    
test_path_exists()


Path does not exist between nodes 18 and 5
False
Path exists between nodes 29 and 26
True

If you write an algorithm that runs breadth-first, the traversal pattern is likely to follow what we have done above. If you do a depth-first search (i.e. DFS) instead, the pattern will look a bit different. Take it as a challenge exercise to figure out what a DFS looks like.
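
If you want to peek ahead, here is one minimal iterative DFS sketch (an illustration, not the official solution); the only real change is using a stack instead of a queue:

def path_exists_dfs(node1, node2, G):
    # Sketch: depth-first variant of path_exists. Popping from the end of the
    # list makes it a stack (last in, first out) rather than a queue.
    visited_nodes = set()
    stack = [node1]
    while len(stack) > 0:
        node = stack.pop()  # take the most recently added node
        if node == node2:
            return True
        visited_nodes.add(node)
        stack.extend(n for n in G.neighbors(node) if n not in visited_nodes)
    return False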

Meanwhile... thankfully, NetworkX has a function for us to use, titled has_path, so we don't have to implement this on our own. :-)

http://networkx.readthedocs.io/en/networkx-1.11/reference/generated/networkx.algorithms.shortest_paths.generic.has_path.html


In [27]:
nx.has_path(G, 29, 26)


Out[27]:
True

NetworkX also has other shortest path algorithms implemented.

http://networkx.readthedocs.io/en/networkx-1.11/reference/algorithms.shortest_paths.html

We can build upon these to build our own graph query functions. Let's see if we can trace the shortest path from one node to another.

nx.shortest_path(G, source, target) gives us a list of the nodes that lie on one of the shortest paths between the two nodes. (If several shortest paths exist, only one of them is returned.)


In [28]:
nx.shortest_path(G, 4, 14)


Out[28]:
[4, 19, 17, 14]

Incidentally, the node list is in path order as well, from source to target.

Exercise

Write a function that extracts the edges in the shortest path between two nodes, puts them into a new graph, and draws it to the screen. It should also raise an error if there is no path between the two nodes.

Hint: You may want to use G.subgraph(iterable_of_nodes) to extract just the nodes and edges of interest from the graph G. You might want to use the following lines of code somewhere:

newG = G.subgraph(nodes_of_interest)
nx.draw(newG)

newG will be comprised of the nodes of interest and the edges that connect them.


In [32]:
# Possible Answer:

def extract_path_edges(G, source, target):
    # Check to make sure that a path does exists between source and target.
    if nx.has_path(G, source, target):
        shor = nx.shortest_path(G, source, target)
        newG = G.subgraph(shor)
        return newG

    else:
        raise Exception('Path does not exist between nodes {0} and {1}.'.format(source, target))
        
newG = extract_path_edges(G, 1, 14)
nx.draw(newG, with_labels=True)


Challenge Exercise (at home)

These exercises below are designed to let you become more familiar with manipulating and visualizing subsets of a graph's nodes.

Write a function that extracts only a given node, its neighbors, and the edges between that node and its neighbors as a new graph. Then, draw the new graph to the screen. (One possible sketch is shown after the two exercise cells below.)


In [ ]:
def extract_neighbor_edges(G, node):

    
    
    
    
    
    return newG

fig = plt.figure(0)
newG = extract_neighbor_edges(G, 19)
nx.draw(newG, with_labels=True)

In [ ]:
def extract_neighbor_edges2(G, node):

    
    
    
    
    
    return newG

fig = plt.figure(1)
newG = extract_neighbor_edges2(G, 19)
nx.draw(newG, with_labels=True)
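
For reference, one possible sketch (hypothetical, not the only valid answer): build a fresh graph containing only the edges that touch the node of interest.

def extract_neighbor_edges_sketch(G, node):
    # Sketch: add only the (node, neighbor) edges to a new graph;
    # add_edge also adds the endpoints as nodes.
    newG = nx.Graph()
    newG.add_node(node)  # keep the node even if it has no neighbors
    for neighbor in G.neighbors(node):
        newG.add_edge(node, neighbor)
    return newG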

Challenge Exercises (at home)

Let's try some other problems that build on the NetworkX API. Refer to the following for the relevant functions:

http://networkx.readthedocs.io/en/networkx-1.11/reference/algorithms.shortest_paths.html

  1. If we want a message to go from one person to another, and we assume that the message takes 1 day for the initial step and 1 additional day per step in the transmission chain (i.e. the first step takes 1 day, the second step takes 2 days, etc.), how long will the message take to spread between any two given individuals? Write a function to compute this.
  2. What is the distribution of message spread times from person to person? What about chain lengths?

In [ ]:
# Possible answer to Question 1:
# All we need here is the length of the path.

def compute_transmission_time(G, source, target):
    """
    Fill in code below.
    """

    
    
    
    
    
    return __________

compute_transmission_time(G, 14, 4)
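
One possible sketch for question 1 (assuming, per the problem statement, that step i of the chain takes i days): a shortest path with k edges then takes 1 + 2 + ... + k = k * (k + 1) / 2 days.

def compute_transmission_time_sketch(G, source, target):
    # Sketch: k is the shortest path length in edges; step i takes i days,
    # so the total time is k * (k + 1) / 2 days.
    k = nx.shortest_path_length(G, source, target)
    return k * (k + 1) // 2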

In [ ]:
# Possible answer to Question 2:
# We need to know the length of every single shortest path between every pair of nodes.
# If we don't put a source and target into the nx.shortest_path_length(G) function call, then
# we get a dictionary of dictionaries, where all source-->target-->lengths are shown.

lengths = []
times = []

## Fill in code below ##
        
        
        
plt.figure(0)
plt.bar(list(Counter(lengths).keys()), list(Counter(lengths).values()))

plt.figure(1)
plt.bar(list(Counter(times).keys()), list(Counter(times).values()))
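
One possible way to fill in the loop above (a sketch, assuming the dictionary-of-dictionaries return value described in the comment):

for source, path_lengths in nx.shortest_path_length(G).items():
    for target, length in path_lengths.items():
        if source != target:  # skip the zero-length path from a node to itself
            lengths.append(length)
            times.append(length * (length + 1) / 2)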

Hubs Revisited

It looks like individual 19 is an important person of some sort: if a message has to be passed through the network in the shortest time possible, it will usually go through person 19. Such a person has a high betweenness centrality. This is implemented as one of NetworkX's centrality algorithms. Check out the Wikipedia page for a further description.

http://en.wikipedia.org/wiki/Betweenness_centrality


In [ ]:
btws = nx.betweenness_centrality(G, normalized=False)
plt.bar(list(btws.keys()), list(btws.values()))

Exercise

Plot betweenness centrality against degree centrality for the network data.


In [ ]:
plt.scatter(__________, ____________)
plt.xlabel('degree')
plt.ylabel('betweenness')
plt.title('centrality scatterplot')
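
One possible fill for the blanks (a sketch; iterating over the node list keeps the x and y values aligned):

# Sketch: plot degree centrality (x) against betweenness centrality (y) per node.
deg_cent = nx.degree_centrality(G)
btws_cent = nx.betweenness_centrality(G)
nodes = G.nodes()
plt.scatter([deg_cent[n] for n in nodes], [btws_cent[n] for n in nodes])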

Think about it...

From the scatter plot, we can see that the dots don't all fall on the same line. Degree centrality and betweenness centrality don't necessarily correlate. Can you think of scenarios where this is true?

What would be the degree centrality and betweenness centrality of the middle connecting node in the barbell graph below?


In [ ]:
nx.draw(nx.barbell_graph(5, 1))

In [ ]: